Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to, among other factors, the high diversity of placenta appearance and the limited field of view of US, which prohibits whole-placenta assessment at late gestation. In this work, we address these challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task, the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task under limited training-data conditions. With this approach, we investigate the variability of annotations from multiple raters and show that our automatic segmentation (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieves human-level performance when compared against intra- and inter-observer variability. Finally, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion, and image segmentation. This results in high-quality, artifact-reduced segmentations of larger structures, such as the placenta at late gestation, that exceed the field of view of a single probe.
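The multi-task idea above pairs a shared encoder with two heads. Below is a minimal, hedged sketch of that setup in PyTorch; the layer sizes, the two-class position head, and the loss combination are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiTaskPlacentaNet(nn.Module):
    """Shared CNN encoder with a segmentation head and a placenta-position head."""
    def __init__(self, num_positions=2):  # e.g. anterior / posterior
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, 1, 1)   # per-pixel placenta logits
        self.cls_head = nn.Sequential(         # pooled features -> position logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_positions)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

# Joint training: classification-only images (the larger, more diverse set) still
# update the shared encoder, which is what helps the segmentation task.
model = MultiTaskPlacentaNet()
seg_logits, cls_logits = model(torch.randn(2, 1, 128, 128))
loss = nn.functional.binary_cross_entropy_with_logits(seg_logits, torch.rand(2, 1, 128, 128)) \
       + nn.functional.cross_entropy(cls_logits, torch.tensor([0, 1]))
loss.backward()
```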
Fundamental modelling differences between hybrid and end-to-end (E2E) automatic speech recognition (ASR) systems create a large degree of diversity and complementarity between them. This paper investigates multi-pass rescoring and cross-adaptation based system combination approaches for hybrid TDNN and Conformer E2E ASR systems. In multi-pass rescoring, state-of-the-art hybrid LF-MMI trained CNN-TDNN systems featuring speed perturbation, SpecAugment and Bayesian learning hidden unit contributions (LHUC) speaker adaptation are used to produce initial N-best outputs, which are then rescored by a speaker-adapted Conformer system using a two-way cross-system score interpolation. In cross adaptation, the hybrid CNN-TDNN system is adapted to the 1-best outputs of the Conformer system, or vice versa. Experiments on the 300-hour Switchboard corpus suggest that the combined systems derived using either of the two system combination approaches outperform the individual systems on the NIST Hub5'00, RT03 and RT02 evaluation data.
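For the multi-pass rescoring stage, the two-way cross-system score interpolation can be pictured as a weighted sum of the hybrid and Conformer scores over the hybrid system's N-best list. The sketch below is a hedged illustration; the function names, toy scores and the 0.5 interpolation weight are assumptions, not the paper's tuned values.

```python
def rescore_nbest(nbest, conformer_score, weight=0.5):
    """nbest: list of (hypothesis, hybrid_log_score) pairs.
    conformer_score: callable returning the Conformer log score of a hypothesis."""
    rescored = [
        (hyp, weight * hybrid_score + (1.0 - weight) * conformer_score(hyp))
        for hyp, hybrid_score in nbest
    ]
    # Keep the hypothesis with the best interpolated score.
    return max(rescored, key=lambda pair: pair[1])[0]

# Toy usage with dummy log scores standing in for the two systems.
nbest = [("yeah right", -12.3), ("yeah write", -13.1)]
best = rescore_nbest(nbest, conformer_score=lambda h: -10.0 if "right" in h else -11.5)
```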
Articulatory features are inherently invariant to acoustic signal distortion and have been successfully incorporated into automatic speech recognition (ASR) systems designed for normal speech. Their practical application to atypical task domains such as elderly and disordered speech across languages is often limited by the difficulty of collecting such specialist data from target speakers. This paper presents a cross-domain and cross-lingual acoustic-to-articulatory (A2A) inversion approach that utilizes the parallel audio, visual and ultrasound tongue imaging (UTI) data of the 24-hour TaL corpus in A2A model training, before cross-domain and cross-lingual adaptation to three datasets across two languages: the English DementiaBank Pitt and Cantonese JCCOCC MOCA elderly speech corpora, and the English TORGO dysarthric speech data, in order to produce UTI-based articulatory features. Experiments conducted on the three tasks suggest that incorporating the generated articulatory features consistently outperformed the baseline hybrid TDNN and Conformer based end-to-end systems constructed using acoustic features only, with statistically significant word error rate or character error rate reductions of up to 2.64%, 1.92% and 1.21% absolute (4.17%, 7.89% and 13.28% relative) after data augmentation and speaker adaptation.
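The core A2A component is a regressor from acoustic frames to articulatory features, whose outputs are then used alongside the acoustic features in the ASR front end. Below is a hedged sketch of that idea; the feature dimensions, network depth and the simple concatenation-based fusion are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class A2AInverter(nn.Module):
    """Maps acoustic frames to (UTI-derived) articulatory features."""
    def __init__(self, acoustic_dim=80, articulatory_dim=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(acoustic_dim, 256), nn.ReLU(),
            nn.Linear(256, articulatory_dim),
        )

    def forward(self, acoustic_frames):
        return self.net(acoustic_frames)

# Train on parallel audio/UTI data (e.g. the TaL corpus), then adapt to the
# target elderly or dysarthric domains before generating features for ASR.
inverter = A2AInverter()
acoustics = torch.randn(100, 80)                           # 100 acoustic frames
articulatory = inverter(acoustics)                         # generated articulatory features
asr_input = torch.cat([acoustics, articulatory], dim=-1)   # fused ASR input features
```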
Despite the rapid progress of automatic speech recognition (ASR) technologies targeting normal speech, accurate recognition of dysarthric and elderly speech remains a highly challenging task to date. Due to the mobility issues often found among these users, it is difficult to collect large quantities of such data for ASR system development, so data augmentation techniques play a vital role. In contrast to existing data augmentation techniques that only modify the speaking rate or the overall shape of the spectral contour, a novel set of speaker-dependent (SD) generative adversarial network (GAN) based data augmentation approaches is proposed in this paper. These flexibly allow: a) temporal or speed perturbed normal speech spectra, when parallel speech data are available, to be modified and brought closer to those of an impaired speaker; and b) for non-parallel data, the SVD-decomposed normal speech spectral basis features to be transformed into those of a target elderly speaker, before being recombined with the temporal bases to produce augmented data for training state-of-the-art TDNN and Conformer ASR systems. Experiments were conducted on four tasks: the English UASpeech and TORGO dysarthric speech corpora, and the English DementiaBank Pitt and Cantonese JCCOCC MOCA elderly speech datasets. The proposed GAN-based data augmentation approaches consistently outperform the baseline speed perturbation method, with up to 4.91% and 3.0% absolute (9.61% and 6.4% relative) word error rate reductions on the TORGO and DementiaBank data. Consistent performance improvements are retained after applying LHUC-based speaker adaptation.
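The non-parallel branch b) above rests on an SVD factorization of the spectrogram into spectral and temporal bases. The sketch below shows that decompose, transform and recombine step; the learned GAN mapping is only stubbed out with a placeholder function, and the array shapes are illustrative assumptions.

```python
import numpy as np

def svd_augment(spectrogram, transform_spectral_basis):
    """spectrogram: (freq_bins, frames) magnitude spectrogram."""
    U, s, Vt = np.linalg.svd(spectrogram, full_matrices=False)
    U_target = transform_spectral_basis(U)   # learned mapping, e.g. normal -> elderly speaker
    return (U_target * s) @ Vt               # recombine with the temporal bases

# Toy usage: an identity transform stands in for the learned speaker mapping.
spec = np.abs(np.random.randn(128, 200))
augmented = svd_augment(spec, transform_spectral_basis=lambda U: U)
```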
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
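One concrete reading of the style-aware adaptive transformer is a feed-forward layer whose hidden activations are scaled and shifted by projections of the style code. The sketch below is a hedged illustration of that mechanism; the dimensions and the scale-and-shift form of the modulation are assumptions and may differ from the paper's exact weight adaptation.

```python
import torch
import torch.nn as nn

class StyleAwareFeedForward(nn.Module):
    """Transformer feed-forward layer modulated by a speaking-style code."""
    def __init__(self, dim=256, hidden=1024, style_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.to_scale_shift = nn.Linear(style_dim, 2 * hidden)  # style -> modulation params

    def forward(self, x, style_code):
        # x: (batch, tokens, dim); style_code: (batch, style_dim)
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=-1)
        h = torch.relu(self.fc1(x))
        h = h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)   # style-conditioned modulation
        return self.fc2(h)

layer = StyleAwareFeedForward()
out = layer(torch.randn(2, 10, 256), torch.randn(2, 128))       # stylized token features
```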
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL) yet has been criticized for learning inefficiency. We attribute this to the insufficient utilization of training signals. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch with the disjoint regulation to raise the usage of tokens for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual branch architecture to respectively predict invisible (masked) and visible (unmasked) tokens with superior learning targets. Rooted in orthogonal perspectives on training efficiency improvement, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train a ViT with half the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks like semantic segmentation, object detection, etc., our DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
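A minimal sketch of one way to realize the disjoint regulation is given below: the token indices are permuted once and split into non-overlapping visible blocks, so the views of an image jointly cover all tokens while each view keeps the usual masking rate. Whether the paper places the disjoint constraint on the visible or the masked set, and the exact view count, are assumptions here; the joint-distillation branch is not shown.

```python
import numpy as np

def disjoint_masked_views(num_tokens=196, mask_ratio=0.75):
    """Return boolean masks (True = masked) whose visible sets do not overlap."""
    num_visible = int(num_tokens * (1 - mask_ratio))
    num_views = num_tokens // num_visible          # e.g. 4 views at a 75% masking rate
    perm = np.random.permutation(num_tokens)
    views = []
    for v in range(num_views):
        visible = perm[v * num_visible:(v + 1) * num_visible]
        mask = np.ones(num_tokens, dtype=bool)
        mask[visible] = False
        views.append(mask)
    return views

masks = disjoint_masked_views()                    # 4 masks with disjoint visible patches
```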
Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning technology. However, the existing methods usually require a large computational cost. Meanwhile, the activation function will cause some features of the intermediate layer to be lost. Therefore, it is a challenge to make the model lightweight while reducing the impact of intermediate feature loss on the reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate the above problem. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature Shuffle Weighted Group (FSWG) through mutual information mixing and fusion. In addition, to mitigate the adverse effects of intermediate feature loss on the reconstruction results, we introduce well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units in WDIB, and effectively cross-fuse features of different finenesses through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features lacking in CNN models, we introduce the Transformer into our model and explore a new way of combining the CNN and the Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that our proposed FIWHN can achieve a good balance between performance and efficiency, and better supports downstream tasks in low-resolution scenarios.
Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets.
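As a concrete instance of bounding a loss quantile from order statistics, the sketch below uses the classical distribution-free construction: with i.i.d. losses, the k-th smallest observed loss upper-bounds the q-quantile with probability at least 1 - delta once the Binomial(n, q) CDF at k - 1 reaches 1 - delta. This is a standard bound offered for intuition and is not claimed to be the authors' exact construction.

```python
import numpy as np
from scipy.stats import binom

def quantile_upper_bound(losses, q=0.9, delta=0.05):
    """Distribution-free (1 - delta)-confidence upper bound on the q-quantile of the loss."""
    losses = np.sort(np.asarray(losses))
    n = len(losses)
    ks = np.arange(1, n + 1)
    ok = binom.cdf(ks - 1, n, q) >= 1 - delta   # coverage condition for each candidate k
    if not ok.any():
        return np.inf                           # too few samples for this (q, delta) pair
    k = ks[ok][0]                               # smallest k achieving the required coverage
    return losses[k - 1]                        # k-th order statistic (1-indexed)

# Toy usage: bound the 90th-percentile loss from 1000 observed losses.
bound = quantile_upper_bound(np.random.rand(1000), q=0.9, delta=0.05)
```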
Recently, large-scale pre-trained models have shown their advantages in many tasks. However, due to the huge computational complexity and storage requirements, it is challenging to apply the large-scale model to real scenes. A common solution is knowledge distillation which regards the large-scale model as a teacher model and helps to train a small student model to obtain a competitive performance. Cross-task Knowledge distillation expands the application scenarios of the large-scale pre-trained model. Existing knowledge distillation works focus on directly mimicking the final prediction or the intermediate layers of the teacher model, which represent the global-level characteristics and are task-specific. To alleviate the constraint of different label spaces, capturing invariant intrinsic local object characteristics (such as the shape characteristics of the leg and tail of the cattle and horse) plays a key role. Considering the complexity and variability of real scene tasks, we propose a Prototype-guided Cross-task Knowledge Distillation (ProC-KD) approach to transfer the intrinsic local-level object knowledge of a large-scale teacher network to various task scenarios. First, to better transfer the generalized knowledge in the teacher model in cross-task scenarios, we propose a prototype learning module to learn from the essential feature representation of objects in the teacher model. Secondly, for diverse downstream tasks, we propose a task-adaptive feature augmentation module to enhance the features of the student model with the learned generalization prototype features and guide the training of the student model to improve its generalization ability. The experimental results on various visual tasks demonstrate the effectiveness of our approach for large-scale model cross-task knowledge distillation scenes.
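To make the prototype idea tangible, the sketch below summarizes teacher object features as a small set of prototype vectors and pulls each student feature toward its most similar prototype. This is only a hedged, simplified stand-in: the prototype learning and task-adaptive augmentation modules in the paper are more elaborate, and the cosine-similarity matching and MSE loss here are assumptions.

```python
import torch
import torch.nn.functional as F

def nearest_prototype_loss(student_feats, prototypes):
    """student_feats: (N, D) local features; prototypes: (P, D) teacher-derived prototypes."""
    sim = F.normalize(student_feats, dim=-1) @ F.normalize(prototypes, dim=-1).T
    nearest = prototypes[sim.argmax(dim=-1)]        # most similar prototype per feature
    return F.mse_loss(student_feats, nearest)       # pull student features toward prototypes

# Toy usage: 32 student features, 10 prototypes, feature dimension 256.
loss = nearest_prototype_loss(torch.randn(32, 256), torch.randn(10, 256))
```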
Crowd counting plays an important role in risk perception and early warning, traffic control and scene statistical analysis. The challenges of crowd counting in highly dense and complex scenes lie in the mutual occlusion of human body parts, the large variation of body scales and the complexity of imaging conditions. Deep learning based head detection is a promising method for crowd counting. However, mainstream object detection networks cannot be applied well to this field for two main reasons. First, most of the existing head detection datasets are annotated only with center points instead of the bounding boxes that canonical detectors require. Second, sample imbalance has not yet been overcome in highly dense and complex scenes, because existing loss functions calculate the positive loss at a single key point or over the entire target area with the same weight. To address these problems, we propose a novel loss function, called Mask Focal Loss, to unify the loss functions based on heatmap ground truth (GT) and binary feature map GT. Mask Focal Loss redefines the weight of the loss contributions according to the value of the Gaussian-kernel heatmap at each position. For better evaluation and comparison, a new synthetic dataset, GTA_Head, is made public, including 35 sequences, 5,096 images and 1,732,043 head labels with bounding boxes. Experimental results show superior performance and demonstrate that our proposed Mask Focal Loss is applicable to all of the canonical detectors and to various datasets with different GT. This provides a strong basis for surpassing crowd counting methods based on density estimation.
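For intuition, the sketch below implements a Gaussian-heatmap focal loss in the same family as the Mask Focal Loss described above: negatives close to a peak are down-weighted according to the local heatmap value. It follows the widely used CornerNet/CenterNet formulation and may differ from the paper's exact definition; alpha and beta are illustrative hyperparameters.

```python
import torch

def heatmap_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-6):
    """pred: predicted heatmap in (0, 1); gt: Gaussian-smoothed ground-truth heatmap."""
    pos = gt.eq(1).float()                  # exact peak locations count as positives
    neg = 1.0 - pos
    pos_loss = -((1 - pred) ** alpha) * torch.log(pred + eps) * pos
    # Negatives near a peak (high gt value) are down-weighted by (1 - gt)^beta.
    neg_loss = -((1 - gt) ** beta) * (pred ** alpha) * torch.log(1 - pred + eps) * neg
    num_pos = pos.sum().clamp(min=1.0)
    return (pos_loss.sum() + neg_loss.sum()) / num_pos

# Toy usage: one head centred at (32, 32) in a 64x64 heatmap.
gt = torch.zeros(1, 1, 64, 64)
gt[0, 0, 32, 32] = 1.0
loss = heatmap_focal_loss(torch.rand(1, 1, 64, 64).clamp(0.01, 0.99), gt)
```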